g-priors for Linear Regression
Abstract
y = Xβ + ε, where X is the design matrix, ε ∼ N(0, σI), and β ∼ N(β0, gσ(XᵀX)⁻¹). The prior on σ is the Jeffreys prior, π(σ) ∝ 1/σ², and β0 is usually taken to be 0 for simplicity. The appeal of the method is that there is only one free parameter g for the entire linear regression model. Furthermore, the simplicity of the g-prior generally leads to easily obtained analytical results. However, we still face the problem of selecting g, or a prior for g, and this lecture provides an overview of the issues that arise.
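One of those easily obtained analytical results: with β0 = 0 and a fixed g, the conditional posterior mean of β is the OLS estimate shrunk by the factor g/(1+g). The sketch below illustrates this on simulated data (the data and the choice g = 100 are illustrative assumptions, not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression data (illustrative assumption, not from the lecture).
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

# OLS estimate: beta_hat = (X^T X)^{-1} X^T y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Under the g-prior beta ~ N(0, g * sigma * (X^T X)^{-1}) with beta0 = 0,
# the conditional posterior mean shrinks the OLS estimate toward zero:
g = 100.0
beta_post = (g / (1.0 + g)) * beta_ols
```

Note that the only tuning knob is g: large g leaves the posterior mean close to the OLS fit, while small g pulls it strongly toward the prior mean of 0.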
Related Resources
Bayesian Inference for Spatial Beta Generalized Linear Mixed Models
In some applications, the response variable takes values in the unit interval. The standard linear regression model is not appropriate for this type of data because the normality assumption is not met. Alternatively, the beta regression model has been introduced to analyze such observations. The beta distribution is a flexible density family on the (0, 1) interval that covers symm...
Approximate Bayesian Model Selection with the Deviance Statistic
Bayesian model selection poses two main challenges: the specification of parameter priors for all models, and the computation of the resulting Bayes factors between models. There is now a large literature on automatic and objective parameter priors in the linear model. One important class is g-priors, which were recently extended from linear to generalized linear models (GLMs). We show that th...
An Information Matrix Prior for Bayesian Analysis in Generalized Linear Models with High Dimensional Data.
An important challenge in analyzing high dimensional data in regression settings is that of facing a situation in which the number of covariates p in the model greatly exceeds the sample size n (sometimes termed the "p > n" problem). In this article, we develop a novel specification for a general class of prior distributions, called Information Matrix (IM) priors, for high-dimensional generaliz...
Maximum a posteriori linear regression with elliptically symmetric matrix variate priors
In this paper, an elliptically symmetric matrix variate distribution is proposed as the prior distribution for maximum a posteriori linear regression (MAPLR) based model adaptation. The exact closed-form solution of MAPLR with elliptically symmetric matrix variate priors is obtained. The effects of the proposed prior in MAPLR are characterized and compared with conventional maximum likelihood linear reg...
Inference from Intrinsic Bayes' Procedures Under Model Selection and Uncertainty
In this paper we present a fully coherent and consistent objective Bayesian analysis of the linear regression model using intrinsic priors. The intrinsic prior is a scaled mixture of g-priors and promotes shrinkage towards the subspace defined by a base (or null) model. While it has been established that the intrinsic prior provides consistent model selectors across a range of models, the poste...